Dell M1000e
The Dell blade server products are built around their M1000e enclosure that can hold their server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.


Enclosure

The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg. (Dell website: Tech specs for the M1000e, visited 10 March 2013)

The servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back, together with the management module(s) (CMC, or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server. In June 2013 Dell introduced the PowerEdge VRTX, a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (the only blades supported), are not interchangeable between the VRTX and the M1000e: the blades differ in firmware and mezzanine connectors. In 2018 Dell introduced the PowerEdge MX7000, a new MX enclosure model and the next generation of Dell enclosures. The M1000e enclosure has a front side and a back side, and all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides, where the front side is dedicated to server blades and the back to I/O modules.


Midplane

The midplane is completely passive. The server blades are inserted in the front side of the enclosure while all other components can be reached via the back. (Dell support website: M1000e owner's manual, retrieved 26 October 2012)

The capabilities of the two midplane versions are:
* Original midplane 1.0: Fabric A - Ethernet 1Gb; Fabrics B & C - Ethernet 1Gb, 10Gb, 40Gb; Fibre Channel 4Gb, 8Gb; InfiniBand DDR, QDR, FDR10.
* Enhanced midplane 1.1: Fabric A - Ethernet 1Gb, 10Gb; Fabrics B & C - Ethernet 1Gb, 10Gb, 40Gb; Fibre Channel 4Gb, 8Gb, 16Gb; InfiniBand DDR, QDR, FDR10, FDR.
The original M1000e enclosures came with midplane version 1.0, but that midplane does not support the 10GBASE-KR standard on fabric A (10GBASE-KR is supported on fabrics B & C). To have 10Gb Ethernet on fabric A, or 16Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B & C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1, and it is possible to upgrade the midplane. The installed version can be recognised via the markings on the back side of the enclosure, just above the I/O modules: if an "arrow down" symbol is visible above the 6 I/O slots, the 1.0 midplane was installed in the factory; if there are 3 or 4 horizontal bars, midplane 1.1 was installed. As the midplane can be upgraded these outside markings are not decisive: the actually installed midplane version is visible via the CMC management interface.


Front: blade servers

Each M1000e enclosure can hold up to 32 quarter-height blades, 16 half-height blades or 8 full-height blades, or combinations (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16, where 1-8 are the ''upper'' blades and 9-16 are directly beneath 1-8. When using full-height blades one uses slot n (where n = 1 to 8) and slot n+8. Integrated at the bottom of the front side is a connection option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection (15-pin). Next to this is a power button with power indication, and next to that a small LCD screen with navigation buttons which allows one to get system information without the need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it towards oneself and tilt it for optimal view and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible, with a blue LED indicating normal operation and an orange LED indicating a problem of some kind. This LCD display can also be used for the initial configuration wizard in a newly delivered (unconfigured) system, allowing the operator to configure the CMC IP address.


Back: power, management and I/O

All other parts and modules are placed at the rear of the M1000e. The rear side is divided into 3 sections. At the top one inserts the 3 management modules: one or two CMC modules and an optional iKVM module. At the bottom of the enclosure there are 6 bays for power supply units; a standard M1000e operates with three PSUs. The area in between offers 3 x 3 bays for cooling fans (left - middle - right) and up to 6 I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1 while the right-hand side has places for A2, B2 and C2. The fabric A I/O modules connect to the on-board I/O controllers, which in most cases will be a dual 1Gb or 10Gb Ethernet NIC. When the blade has a dual-port on-board 1Gb NIC, the first NIC will connect to the I/O module in fabric A1 and the second NIC will connect to fabric A2, and the blade slot corresponds with the internal Ethernet interface: e.g. the on-board NIC in slot 5 will connect to interface 5 of fabric A1 and the second on-board NIC goes to interface 5 of fabric A2. I/O modules in fabric B1/B2 connect to the (optional) Mezzanine card B (or 2) in the server, and fabric C to Mezzanine C (or 3). All modules can be inserted or removed on a running enclosure (hot swapping).
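As an illustration of this one-to-one mapping between blade slots and internal fabric ports, the following Python sketch models the scheme described above; the function name and data layout are purely illustrative and not part of any Dell tooling.

 # Illustrative sketch (not Dell tooling): which internal switch port a
 # half-height blade's network ports end up on. On-board (LOM) NIC 1 maps to
 # fabric A1 and NIC 2 to fabric A2; an optional mezzanine B card maps to
 # B1/B2 and a mezzanine C card to C1/C2, always on the port matching the slot.
 def fabric_ports(slot: int, has_mezz_b: bool = False, has_mezz_c: bool = False):
     """Return a dict of I/O-module bay -> internal port for a half-height blade."""
     if not 1 <= slot <= 16:
         raise ValueError("half-height blade slots are numbered 1-16")
     mapping = {"A1": slot, "A2": slot}            # dual-port on-board (LOM) NICs
     if has_mezz_b:
         mapping.update({"B1": slot, "B2": slot})  # mezzanine card B
     if has_mezz_c:
         mapping.update({"C1": slot, "C2": slot})  # mezzanine card C
     return mapping

 # Example: a blade in slot 5 with a mezzanine card in slot C
 print(fabric_ports(5, has_mezz_c=True))
 # {'A1': 5, 'A2': 5, 'C1': 5, 'C2': 5}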


Available server-blades

An M1000e holds up to 32 quarter-height blades, 16 half-height blades or 8 full-height blades, or a mix of them (e.g. 2 full-height + 12 half-height). The quarter-height blades require a full-size sleeve to install. The list below covers the currently available 11th-generation (11G) blades and the latest generation-12 models; there are also older blades like the M605, M805 and M905 series.


PowerEdge M420

Released in 2012, the PE M420 is a "quarter-size" blade: where most servers are 'half-size', allowing 16 blades per M1000e enclosure, up to 32 of the new M420 blade servers can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in a blade, but as it is now possible to run 32 blades per chassis one might need to change the management IP assignment for the iDRAC. To support the M420 server one needs to run CMC firmware 4.1 or later, and one needs a full-size "sleeve" that holds up to four M420 blades. It also has consequences for the "normal" I/O NIC assignment: most (half-size) blades have two LOMs (LAN On Motherboard), one connecting to the switch in the A1 fabric, the other to the A2 fabric, and the same applies to the Mezzanine cards B and C. All available I/O modules (except for the PCM6348, MXL and MIOA) have 16 internal ports: one for each half-size blade. As an M420 has two 10Gb LOM NICs, a fully loaded chassis would require 2 × 32 internal switch ports for LOM and the same for Mezzanine. An M420 server only supports a single Mezzanine card (Mezzanine B or Mezzanine C depending on its location) whereas all half-height and full-height systems support two Mezzanine cards. To support all on-board NICs one would need to deploy a 32-slot Ethernet switch such as the MXL or Force10 I/O Aggregator. For the Mezzanine card it is different: the connections from Mezzanine B on the PE M420 are "load-balanced" between the B and C fabrics of the M1000e: the Mezzanine card in "slot A" (the top slot in the sleeve) connects to fabric C while "slot B" (the second slot from the top) connects to fabric B, and that is then repeated for the C and D slots in the sleeve, as sketched below.
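The fabric assignment of the M420 mezzanine slot can be summarised in a small lookup table; the sketch below is a hypothetical helper (not Dell tooling) that simply encodes the scheme described above.

 # Illustrative sketch (not Dell tooling): which M1000e fabric the single
 # mezzanine card of a PE M420 connects to, based on the blade's position
 # (A-D, top to bottom) inside the full-size sleeve.
 SLEEVE_SLOT_TO_FABRIC = {
     "A": "C",  # top slot in the sleeve   -> fabric C
     "B": "B",  # second slot from the top -> fabric B
     "C": "C",  # pattern repeats for the lower two slots
     "D": "B",
 }

 def m420_mezzanine_fabric(sleeve_slot: str) -> str:
     return SLEEVE_SLOT_TO_FABRIC[sleeve_slot.upper()]

 print(m420_mezzanine_fabric("a"))  # -> 'C'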


PowerEdge M520

A half-height server with up to 2x 8-core Intel Xeon E5-2400 CPUs, using the Intel C600 chipset and offering up to 384 GB RAM via 12 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage, and there is a choice of Intel or Broadcom LOM plus 2 Mezzanine slots for I/O. (Dell website: PowerEdge M630 technical specifications, visited 29 August 2016) The M520 can also be used in the PowerEdge VRTX system.


PowerEdge M600

A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB RAM.


PowerEdge M610

A half-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and the Intel 5520 chipset. RAM options via 12 DIMM slots for up to 192 GB DDR3 RAM. A maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs, and a choice of built-in NICs for Ethernet or a converged network adapter (CNA), Fibre Channel or InfiniBand. The video card is a Matrox G200.


PowerEdge M610x

A full-height blade server that has the same capabilities as the half-height M610 but offers an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards.


PowerEdge M620

A half-height server with up to 2x 12-core Intel Xeon E5-2600 or Xeon E5-2600 v2 CPUs, using the Intel C600 chipset and offering up to 768 GB RAM via 24 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage, with a range of RAID controller options. The blade has two external and one internal USB port and two SD card slots. The blades can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL, and can also be ordered with Citrix XenServer or VMware vSphere ESXi, or use Hyper-V which comes with Windows 2008 R2. According to the vendor all generation-12 servers are optimized to run as virtualisation platforms. Out-of-band management is done via iDRAC 7 through the CMC.


PowerEdge M630

A half-height server with up to 2x 22-core Intel Xeon E5-2600 v3/v4 CPUs, using the Intel C610 chipset and offering up to 768 GB RAM via 24 DIMM slots, or 640 GB RAM via 20 DIMM slots when using 145 W CPUs. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage, and there is a choice of Intel or Broadcom LOM plus 2 Mezzanine slots for I/O. The M630 can also be used in the PowerEdge VRTX system. Amulet HotKey offers a modified M630 server that can be fitted with a GPU or Teradici PCoIP Mezzanine module.


PowerEdge M640

A half-height server with up to 2x 28-core Xeon Scalable CPUs. Supported in both the M1000e and the PowerEdge VRTX chassis. The server can support up to 16 DDR4 RDIMM memory slots for up to 1024 GB RAM and 2 drive bays supporting SAS/SATA or NVMe drives (with an adapter). The server uses iDRAC 9.


PowerEdge M710

A full-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and up to 192 GB RAM. A maximum of four on-blade hot-pluggable 2.5-inch hard disks or SSDs and a choice of built-in NICs for Ethernet, converged network adapter, Fibre Channel or InfiniBand. The video card is a Matrox G200 and the server uses the Intel 5520 chipset.


PowerEdge M710HD

A two-socket version of the M710, but in a half-height blade. The CPUs can be two quad-core or six-core Xeon 5500 or 5600 with the Intel 5520 chipset. Via 18 DIMM slots up to 288 GB DDR3 RAM can be put on this blade, with the standard choice of on-board Ethernet NICs based on Broadcom or Intel and one or two Mezzanine cards for Ethernet, Fibre Channel or InfiniBand.


PowerEdge M820

A full-height server with 4x 8-core Intel Xeon E5-4600 CPUs, using the Intel C600 chipset and offering up to 1.5 TB RAM via 48 DIMM slots. Up to four on-blade 2.5-inch SAS HDDs/SSDs or two PCIe flash SSDs can be installed for local storage. The M820 offers a choice of 3 different on-board converged Ethernet adaptors for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic, and up to two additional Mezzanine cards for Ethernet, Fibre Channel or InfiniBand I/O. (Dell website: PowerEdge M820 technical specifications, visited 28 July 2012)


PowerEdge M910

A full-height server of the 11th generation with up to 4x 10-core Intel Xeon E7 CPUs, or 4x 8-core Xeon 7500-series or 2x 8-core Xeon 6500-series CPUs, 512 GB or 1 TB DDR3 RAM and two hot-swappable 2.5-inch hard drives (spinning or SSD). It uses the Intel E7510 chipset and offers a choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand.


PowerEdge M915

Also a full-height 11G server, using the AMD Opteron 6100 or 6200 series CPU with the AMD SR5670 and SP5100 chipsets. Memory is provided via 32 DDR3 DIMM slots offering up to 512 GB RAM. On board there is room for up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two mezzanine cards for dual-port 10Gb Ethernet, dual-port FCoE, dual-port 8Gb Fibre Channel or dual-port Mellanox InfiniBand. Video is via the on-board Matrox G200eW with 8 MB memory.


Mezzanine cards

Each server comes with Ethernet NICs on the motherboard. These 'on-board' NICs connect to a switch or pass-through module inserted in the A1 or A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O, each blade has two so-called ''mezzanine'' slots: slot B connecting to the switches/modules in bays B1 and B2, and slot C connecting to C1/C2. An M1000e chassis holds up to 6 switches or pass-through modules. For redundancy one would normally install switches in pairs: the switch in bay A2 is normally the same as the A1 switch and connects the blades' on-motherboard NICs to the data or storage network.


(Converged) Ethernet Mezzanine cards

Standard blade servers have one or more built-in NICs that connect to the 'default' switch slot (the ''A-fabric'') in the enclosure (often blade servers also offer one or more external NIC interfaces at the front of the blade), but if one wants the server to have more physical (internal) interfaces, or to connect to different switch blades in the enclosure, one can place extra mezzanine cards on the blade. The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for their PowerEdge blades (Dell support site: overview of manuals for the M1000e chassis, visited 27 June 2011):
* Broadcom 57712 dual-port CNA
* Brocade BR1741M-k CNA
* Mellanox ConnectX-2 dual-port 10Gb card
* Intel dual-port 10Gb Ethernet
* Intel quad-port Gigabit Ethernet
* Intel quad-port Gigabit Ethernet with virtualisation technology and iSCSI acceleration features
* Broadcom NetXtreme II 5709 dual- and quad-port Gigabit Ethernet (dual-port with iSCSI offloading features)
* Broadcom NetXtreme II 5711 dual-port 10Gb Ethernet with iSCSI offloading features


Non-Ethernet cards

Apart from the above, the following mezzanine cards are available:
* Emulex LightPulse LPe1105-M4 host bus adapter
* Mellanox ConnectX IB MDI dual-port InfiniBand mezzanine card
* QLogic SANblade HBA
* SANsurfer Pro


Blade storage

In most setups the server blades will use external storage (NAS using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs on the blades (or even only an SD card with a boot OS like VMware ESX). It is also possible to use completely diskless blades that boot via PXE or external storage. But regardless of the local and boot storage, the majority of the data used by blades is stored on a SAN or NAS external to the blade enclosure.


EqualLogic Blade-SAN

Dell has put the EqualLogic PS M4110 models of iSCSI storage arrays (technical specifications of the EqualLogic PS M4110 blade array, visited 27 September 2012) in a form factor that is physically installed in the M1000e chassis: this SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, getting power from the enclosure system, etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate with it via Ethernet, and the system does require an accepted Ethernet blade switch in the back (or a pass-through module plus rack switch); there is no option for direct communication between the server blades in the chassis and the M4110. The M4110 simply allows a user to pack a complete mini-datacentre into a single enclosure (19" rack, 10 RU). Depending on the model and the disk drives used, the PS M4110 offers a system (raw) storage capacity between 4.5 TB (M4110XV with 14 × 146 GB 15K SAS HDDs) and 14 TB (M4110E with 14 x 1 TB 7.2K SAS HDDs). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs. (Dell datasheet for the PS-M4110, downloaded 2 March 2013)
Each M4110 comes with one or two controllers and two 10-gigabit Ethernet interfaces for iSCSI. Management of the SAN goes via the chassis management interface (CMC). Because the iSCSI uses 10Gb interfaces, the SAN should be used in combination with one of the 10G blade switches: the PCM8024-k or the Force10 MXL switch. The enclosure's midplane hardware version should be at least version 1.1 to support 10Gb KR connectivity. ("Using M1000e System with an AMCC QT2025 Backplane PHY in a 10GBASE-KR Application", retrieved 12 June 2012; "How to find midplane revision of M1000e", visited 19 September 2012)


PowerConnect switches

At the rear side of the enclosure one will find the power supplies, fan trays, one or two chassis management modules (the CMCs) and a virtual KVM switch. The rear also offers 6 bays for I/O modules, numbered in 3 pairs: A1/A2, B1/B2 and C1/C2. The ''A'' bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure).
The Dell PowerConnect switches are modular switches for use in the Dell blade server enclosure M1000e. The M6220, M6348, M8024 and M8024-k are all switches in the same family, based on the same fabrics (Broadcom) and running the same firmware version. (PowerConnect M-series user guide, firmware 4.x, March 2011, retrieved 26 June 2011)
All the M-series switches are OSI layer 3 capable, so one can also say that these devices are layer 2 Ethernet switches with built-in router or layer 3 functionality. The most important difference between the M-series switches and the ''classic'' Dell PowerConnect switches (e.g. the 8024 model) is the fact that most interfaces are ''internal'' interfaces that connect to the blade servers via the midplane of the enclosure. Also, the M-series can't run outside the enclosure: it will only work when inserted in the enclosure.


PowerConnect M6220

This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces, with the option to extend it with up to four 10Gb external interfaces for uplinks, or two 10Gb uplinks and two stacking ports to stack several PCM6220s into one large logical switch.


PowerConnect M6348

This is a 48-port switch: 32 internal 1Gb interfaces (two per server blade) and 16 external copper (RJ45) Gigabit interfaces. There are also two SFP+ slots for 10Gb uplinks and two CX4 slots that can either be used for two extra 10Gb uplinks or to stack several M6348s into one logical switch. The M6348 offers four 1Gb interfaces to each blade, which means that one can only utilize the switch to full capacity when using blades that offer four internal NICs on the A fabric (the internal/on-motherboard NICs). The M6348 can be stacked with other M6348s but also with the PCT7000-series rack switches.


PowerConnect M8024 and M8024-k

The M8024 and M8024-k offer 16 internal autosensing 1Gb/10Gb interfaces and up to 8 external ports via one or two I/O modules, each of which can offer 4 × 10Gb SFP+ slots, 3 × CX4 10Gb (only) copper ports or 2 × 10GBASE-T 1/10Gb RJ-45 interfaces. The PCM8024 has been 'end of sales' since November 2011 and was replaced by the PCM8024-k. Since firmware update 4.2 the PCM8024-k partially supports FCoE via FIP (FCoE Initialisation Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces. Also since firmware 4.2 the PCM8024-k can be stacked using external 10Gb Ethernet interfaces by assigning them as stacking ports. Although this new stacking option was also introduced in the same firmware release for the PCT8024 and PCT8024-f, one can't stack blade (PCM) and rack (PCT) versions in a single stack. The new features are not available on the 'original' PCM8024: firmware 4.2.x for the PCM8024 only corrects bugs; no new features or new functionality are added to 'end of sale' models. (Release notes, page 6 and further, included in firmware package PC 4.2.1.3, release date 2 February 2012, downloaded 16 February 2012)
To use the PCM8024-k switches one will need a midplane that supports the KR or IEEE 802.3ap standard.


PowerConnect capabilities

All PowerConnect M-series ("PCM") switches are multi-layer switches, offering both layer 2 (Ethernet) options and layer 3 (IP routing) options.
Depending on the model, the switches offer internally 1Gbit/s or 10Gbit/s interfaces towards the blades in the chassis. The PowerConnect M-series switches with "-k" in the model name offer 10Gb internal connections using the 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking interfaces, but can also be used to connect non-blade servers to the network.
On the link level PCM switches support link aggregation: both static LAGs and LACP. Like all PowerConnect switches they run RSTP as Spanning Tree Protocol, but it is also possible to run MSTP (Multiple Spanning Tree). The internal ports towards the blades are by default set as edge or "portfast" ports. Another feature is link dependency: one can, for example, configure the switch so that all internal ports to the blades are shut down when the switch becomes isolated because it loses its uplink to the rest of the network.
All PCM switches can be configured as pure layer-2 switches, or they can be configured to do all routing: both routing between the configured VLANs and external routing. Besides static routes the switches also support OSPF and RIP routing. When using the switch as a routing switch one needs to configure VLAN interfaces and assign an IP address to each VLAN interface: it is not possible to assign an IP address directly to a physical interface.


Stacking

All PowerConnect blade switches, except for the ''original'' PC-M8024, can be stacked. To stack the newer PC-M8024-k switch the switches need to run firmware version 4.2 or higher. In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-k switches. The only exception is the capability to stack the blade PCM6348 together with the rack switches PCT7024 or PCT7048. Stacks can contain multiple switches within one M1000e chassis, but one can also stack switches from different chassis to form one logical switch.


Force10 switches


MXL 10/40 Gb switch

At Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40Gbit/s blade switch, and later a 10/40Gbit/s concentrator. The FTOS MXL 40Gb was introduced on 19 July 2012. (Dell community website: Dell announces F10 MXL switch, 24 April 2012, visited 18 May 2012) The MXL provides 32 internal 10Gbit/s links (2 ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots allowing a maximum of 4 additional QSFP+ 40Gbit/s ports or 8 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stacking) uplink or, with a break-out cable, 4 x 10Gbit/s links. Dell offers ''direct attach'' cables with the QSFP+ interface on one side and 4 x SFP+ on the other end, or a QSFP+ transceiver on one end and 4 fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch.
Besides the above 2x40Gb QSFP module the MXL also supports a 4x10Gb SFP+ and a 4x10GBASE-T module. All Ethernet extension modules for the MXL can also be used for the rack-based N4000 series (formerly PowerConnect 8100). The MXL switches also support Fibre Channel over Ethernet, so that server blades with a converged network adapter mezzanine card can be used for both data and storage using a Fibre Channel storage system. The MXL 10/40Gbit/s blade switch runs FTOS and because of this is the first M1000e I/O product without a web graphical user interface. The MXL can either forward the FCoE traffic to an upstream switch or, using a 4-port 8Gb FC module, perform the FCF function, connecting the MXL to a full FC switch or directly to an FC SAN.


I/O Aggregator

In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis, running on FTOS. The I/O Aggregator offers 32 internal 10Gb ports towards the blades, two 40Gbit/s QSFP+ uplinks as standard and two extension slots. Depending on one's requirements one can get extension modules for 40Gb QSFP+ ports, 10Gb SFP+ or 1-10GBASE-T copper interfaces. One can assign up to 16 x 10Gb uplinks to one's distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data center bridging) features.


Cisco switches

Dell also offered some Cisco Catalyst switches for this blade enclosure. Cisco offers a range of switches for blade systems from the main vendors: besides the Dell M1000e enclosure, Cisco offers similar switches for HP, FSC and IBM blade enclosures. (Cisco website: Comprehensive Blade Server I/O Solutions, visited 14 April 2012) For the Dell M1000e there are two model ranges for Ethernet switching. (Note: Cisco also offers the Catalyst 3030, but this switch is for the ''old'' Generation 8 or Gen 9 blade systems, not for the current M1000e enclosure.) As of 2017 the only available Cisco I/O device for the M1000e chassis is the Nexus FEX.


Catalyst 3032

The Catalyst 3032 is a layer 2 switch with 16 internal and 4 external 1Gb Ethernet interfaces, with an option to extend to 8 external 1Gb interfaces. The built-in external ports are 10/100/1000BASE-T copper interfaces with an RJ45 connector, and up to 4 additional 1Gb ports can be added using the extension module slots, which each offer 2 SFP slots for fibre-optic or Twinax 1Gb links. The Catalyst 3032 doesn't offer stacking (virtual blade switching). (Catalyst for Dell at a glance, retrieved 14 April 2012)


Catalyst 3130

The 3130-series switches offer 16 internal 1Gb interfaces towards the blade servers. For the uplink or external connections there are two options: the 3130G offers 4 built-in 10/100/1000BASE-T RJ-45 ports and two module bays allowing for up to 4 SFP 1Gb slots using SFP transceivers or SFP Twinax cables, while the 3130X also offers the 4 external 10/100/1000BASE-T connections and two modules for X2 10Gb uplinks. Both 3130 switches offer 'stacking' or 'virtual blade switching': one can stack up to 8 Catalyst 3130 switches to behave like one single switch. This can simplify the management of the switches and simplify the (spanning tree) topology, as the combined switches are just one switch for spanning-tree considerations. It also allows the network manager to aggregate uplinks from physically different switch units into one logical link. The 3130 switches come standard with IP Base IOS offering all layer 2 and the basic layer 3 or routing capabilities. Users can upgrade this basic license to IP Services or IP Advanced Services, adding additional routing capabilities such as the EIGRP, OSPF or BGP4 routing protocols, IPv6 routing and hardware-based unicast and multicast routing. These advanced features are built into the IOS on the switch, but a user has to upgrade to the ''IP (Advanced) Services'' license to unlock these options.


Nexus Fabric Extender

Since January 2013 Cisco and Dell offer a Nexus Fabric Extender for the M1000e chassis: the Nexus B22Dell. Such FEXes were already available for HP and Fujitsu blade systems, and now there is also a FEX for the M1000e blade system. The release of the B22Dell came approximately 2.5 years after the initially planned and announced date: a disagreement between Dell and Cisco resulted in Cisco stopping the development of the FEX for the M1000e in 2010. (The Register: Cisco cuts Nexus 4001d blade switch, 16 February 2010, visited 10 March 2013) Customers manage a FEX from a core Nexus 5500-series switch.


Other I/O cards

An M1000e enclosure can hold up to 6 switches or other I/O cards. Besides the Ethernet switches mentioned above (the PowerConnect M-series, Force10 MXL and Cisco Catalyst 3100 switches), the following I/O modules are available or usable in a Dell M1000e enclosure:
* Ethernet pass-through modules bring internal server interfaces to an external interface at the back of the enclosure. There are pass-through modules for 1G, 10G-XAUI and 10G 10GBASE-XR; all pass-through modules offer 16 internal interfaces linked to 16 external ports on the module.
* Emulex 4 or 8Gb Fibre Channel pass-through module
* Brocade 5424 8Gb FC switch for Fibre Channel-based storage area networks
* Brocade M6505 16Gb FC switch
* Dell 4 or 8Gb Fibre Channel NPIV port aggregator
* Mellanox 2401G and 4001F/Q InfiniBand Dual Data Rate or Quad Data Rate modules for high-performance computing
* Infiniscale 4: 16-port 40Gb InfiniBand switch
* Cisco M7000e InfiniBand switch with 8 external DDR ports
* the PowerConnect 8428-k switch (below) with 4 "native" 8Gb Fibre Channel interfaces


PCM 8428-k Brocade FCoE

Although the PCM8024-k and MXL switches support Fibre Channel over Ethernet, they are not 'native' FCoE switches: they have no Fibre Channel interfaces. These switches would need to be connected to a "native" FCoE switch such as the PowerConnect B-series 8000e (the same as a Brocade 8000 switch) or a Cisco Nexus 5000-series switch with Fibre Channel interfaces (and licenses). The PCM8428-k is the only full Fibre Channel over Ethernet capable switch for the M1000e enclosure; it offers 16 enhanced-Ethernet 10Gb internal interfaces, 8 (enhanced) Ethernet 10Gb external ports and also up to four 8Gb Fibre Channel interfaces to connect directly to an FC SAN controller or central Fibre Channel switch.
The switch runs Brocade FC firmware for the fabric and Fibre Channel switch and Foundry OS for the Ethernet switch configuration. In capabilities it is very comparable to the PowerConnect B8000; only the form factor and the number of Ethernet and FC interfaces are different.


PowerConnect M5424 / Brocade 5424

This is a Brocade full Fibre Channel switch. It uses either the B or C fabric to connect the Fibre Channel mezzanine cards in the blades to the FC-based storage infrastructure. The M5424 offers 16 internal ports connecting to the FC mezzanine cards in the blade servers and 8 external ports. From the factory only the first two external ports (17 and 18) are licensed; additional connections require extra Dynamic Ports On Demand (DPOD) licenses. The switch runs on a PowerPC 440EPX processor at 667 MHz with 512 MB DDR2 RAM system memory. Furthermore it has 4 MB boot flash and 512 MB compact flash memory on board.


Brocade M6505

Similar capabilities as above, but offering 16 x 16Gb FC ports towards the server mezzanines and 8 external ports. The standard license offers 12 connections, which can be increased by 12 to support all 24 ports. It auto-senses speeds of 2, 4, 8 and 16Gb, for a total aggregate bandwidth of 384 Gbit/s.


Brocade 4424

Like the 5424, the 4424 is also a Brocade SAN I/O module offering 16 internal and 8 external ports. The switch supports speeds up to 4 Gbit/s. When delivered, 12 of the ports are licensed to be operational, and with additional licenses one can enable all 24 ports. The 4424 runs on a PowerPC 440GP processor at 333 MHz with 256 MB SDRAM system memory, 4 MB boot flash and 256 MB compact flash memory.


InfiniBand

There are several modules available offering InfiniBand connectivity on the M1000e chassis. InfiniBand offers high-bandwidth/low-latency intra-computer connectivity such as is required in academic HPC clusters, large enterprise datacenters and cloud applications. There is the SFS M7000e InfiniBand switch from Cisco: the Cisco SFS offers 16 internal 'autosensing' interfaces for single (10Gbit/s, SDR) or double (20Gbit/s, DDR) data rate and 8 DDR external/uplink ports, for a total switching capacity of 960 Gbit/s. Other options are the Mellanox SwitchX M4001F and M4001Q (Mellanox user guide for the SwitchX M4001 InfiniBand switches, November 2011, retrieved 12 October 2012) and the Mellanox M2401G 20Gb InfiniBand switch for the M1000e enclosure (Mellanox user guide for the M2401 InfiniBand switch, June 2008, visited 12 October 2012).
The M4001 switches offer either 40Gbit/s (M4001Q) or 56Gbit/s (M4001F) connectivity and have 16 external interfaces using QSFP ports and 16 internal connections to the InfiniBand mezzanine cards on the blades. As with all other non-Ethernet based switches, they can only be installed in the B or C fabric of the M1000e enclosure, as the A fabric connects to the "on motherboard" NICs of the blades, which only come as Ethernet or converged Ethernet NICs. The 2401G offers 24 ports: 16 internal and 8 external. Unlike the M4001 switches, where the external ports use QSFP ports for fibre transceivers, the 2401 has CX4 copper cable interfaces. The switching capacity of the M2401 is 960 Gbit/s, while the M4001, with 16 internal and 16 external ports at either 40 or 56 Gbit/s, offers a switching capacity of 2.56 Tbit/s.


Passthrough modules

In some setups one doesn't want or need switching capabilities in the enclosure. For example, if only a few of the blade servers use Fibre Channel storage, one doesn't need a fully manageable FC switch: one just wants to be able to connect the 'internal' FC interface of the blade directly to the (existing) FC infrastructure. A pass-through module has only very limited management capabilities. Other reasons to choose pass-through instead of 'enclosure switches' could be the wish to have all switching done on a 'one vendor' infrastructure; if that isn't available as an M1000e module (thus not one of the switches from Dell PowerConnect, Dell Force10 or Cisco) one could go for pass-through modules:
* 32-port 10/100/1000 Mbit/s Gigabit Ethernet pass-through card: connects 16 internal Ethernet interfaces (1 per blade) to an external RJ45 10/100/1000 Mbit/s copper port
* 32-port 10Gb NIC version: supports 16 internal 10Gb ports with 16 external SFP+ slots
* 32-port 10Gb CNA version: supports 16 internal 10Gb CNA ports with 16 external CNAs (10Gb pass-through specifications, PDF, retrieved 27 June 2011)
* Dell 4 or 8Gb Fibre Channel NPIV port aggregator
* Intel/QLogic offer a QDR InfiniBand pass-through module for the Dell M1000e chassis, and a mezzanine version of the QLE7340 QDR IB HCA.


Managing the enclosure

An M1000e enclosure offers several ways for management. The M1000e offers 'out of band' management: a dedicated VLAN (or even physical LAN) for management. The CMC modules in the enclosure offer management Ethernet interfaces and do not rely on network connections made via the I/O switches in the enclosure. One would normally connect the Ethernet links on the CMC, avoiding a switch in the enclosure. Often a physically isolated LAN is created for management, allowing management access to all enclosures even when the entire infrastructure is down. Each M1000e chassis can hold two CMC modules. Each enclosure can have either one or two CMC controllers, and by default one can access the CMC web GUI via https and SSH for command-line access. It is also possible to access the enclosure management via a serial port for CLI access, or using a local keyboard, mouse and monitor via the iKVM switch. It is possible to daisy-chain several M1000e enclosures.


Management interface

The information below assumes the use of the web GUI of the M1000e CMC, although all functions are also available via the text-based CLI. To access the management system one must open the CMC web GUI via https using the out-of-band management IP address of the CMC. When the enclosure is in 'stand-alone' mode one will get a general overview of the entire system: the web GUI shows how the system looks in reality, including the status LEDs etc. By default the Ethernet interface of a CMC card will get an address from a DHCP server, but it is also possible to configure an IPv4 or IPv6 address via the LCD display at the front of the chassis. Once the IP address is set or known, the operator can access the web GUI using the default root account that is built in from the factory.
Via the CMC management one can configure chassis-related features: management IP addresses, authentication features (local user list, using a RADIUS or TACACS server), access options (web GUI, CLI, serial link, KVM etc.) and error logging (syslog server), etc. Via the CMC interface one can also configure blades in the system and configure iDRAC access to those servers. Once enabled, one can access the iDRAC (and with that the console of the server) via this web GUI or by directly opening the web GUI of the iDRAC. The same applies to the I/O modules in the rear of the system: via the CMC one can assign an IP address to the I/O module in one of the 6 slots and then surf to the web GUI of that module (if there is a web-based GUI: unmanaged pass-through modules don't offer a web GUI as there is nothing to configure).
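As a minimal illustration of scripting against the CMC's command-line access described above, the sketch below uses the Python paramiko library to log in over SSH and run a RACADM status query. The IP address and credentials are placeholders, and the availability of the 'racadm getsysinfo' subcommand over the SSH session is an assumption that depends on the CMC firmware.

 # Minimal sketch, not an official Dell tool: query chassis information from an
 # M1000e CMC over SSH. Assumes the CMC's SSH service is enabled and that the
 # firmware accepts RACADM commands on the SSH command line; host and
 # credentials below are placeholders.
 import paramiko

 CMC_HOST = "192.0.2.10"        # out-of-band management IP of the CMC (example)
 CMC_USER = "root"              # default factory account, as noted above
 CMC_PASS = "changed-password"  # placeholder

 client = paramiko.SSHClient()
 client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
 client.connect(CMC_HOST, username=CMC_USER, password=CMC_PASS)

 # 'racadm getsysinfo' prints general chassis information (assumed available on
 # this firmware); other subcommands cover users, alerts, I/O modules, etc.
 stdin, stdout, stderr = client.exec_command("racadm getsysinfo")
 print(stdout.read().decode())

 client.close()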


LCD screen

On the front side of the chassis there is a small hidden LCD screen with 3 buttons: one 4-way directional button allowing one to navigate through the menus on the screen, and two "on/off" push buttons which work as an "OK" or "Escape" button. The screen can be used to check the status of the enclosure and the modules in it: one can for example check active alarms on the system, get the IP address of the CMC or KVM, check the system names etc. Especially in an environment where there are multiple enclosures in one datacenter it can be useful to check whether one is working on the correct enclosure. Unlike rack or tower servers there is only a very limited set of indicators on the individual servers: a blade server has a power LED and (local) disc-activity LEDs but no LCD display offering alarms, hostnames etc. Nor are there LEDs for I/O activity: this is all combined in this little screen, giving information on both the enclosure and the inserted servers, switches, fans, power supplies etc. The LCD screen can also be used for the initial configuration of an unconfigured chassis: one can use it to set the interface language and to set the IP address of the CMC for further CLI or web-based configuration. During normal operation the display can be "pushed" into the chassis and is mainly hidden; to use it one would need to pull it out and tilt it to read the screen and access the buttons.


Blade 17: Local management I/O

A blade system is not really designed for local (on-site) management, and nearly all communication with the modules in the enclosure and with the enclosure itself takes place via the "CMC" card(s) at the back of the enclosure. At the front side of the chassis, directly adjacent to the power button, one can connect a local terminal: a standard
VGA Video Graphics Array (VGA) is a video display controller and accompanying de facto graphics standard, first introduced with the IBM PS/2 line of computers in 1987, which became ubiquitous in the PC industry within three years. The term can no ...
monitor connector and two
USB Universal Serial Bus (USB) is an industry standard that establishes specifications for cables, connectors and protocols for connection, communication and power supply (interfacing) between computers, peripherals and other computers. A broad ...
connectors. This connection is referred to inside the system as 'blade 17' and provides a local interface to the CMC management cards.


iDRAC remote access

Apart from normal operational access to one's blade servers (e.g. SSH sessions to a Linux-based OS, RDP to a Windows-based OS etc.) there are roughly two ways to manage the server blades: via the iDRAC function or via the iKVM switch. Each blade in the enclosure comes with a built-in iDRAC that allows one to access the console over an IP connection. The iDRAC on a blade server works in the same way as an iDRAC card on a rack or tower server: there is a dedicated iDRAC network to reach the iDRAC function. In rack or tower servers a dedicated iDRAC Ethernet interface connects to a management LAN; on blade servers it works the same, except that the iDRAC setup is configured via the CMC. Access to the iDRAC of a blade is NOT linked to any of the on-board NICs: even if all the server's NICs were down (the on-motherboard NICs as well as Mezzanine B and C), one can still access the iDRAC.
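As a small illustration of this out-of-band path, the sketch below asks a blade's iDRAC for its power state over SSH, independently of the blade's own NICs and operating system. The address, the factory credentials and the serveraction powerstatus subcommand are assumptions to check against the iDRAC documentation for the firmware in use.

# Sketch: query a blade's power state through its iDRAC. This works even when
# the blade's own NICs are down, because the iDRAC sits on the management
# network. Address, credentials and command syntax are placeholders/assumptions.
import paramiko

IDRAC_HOST = "192.168.0.141"        # hypothetical iDRAC address of blade 1
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(IDRAC_HOST, username="root", password="calvin")   # factory defaults, change them

_, stdout, _ = client.exec_command("racadm serveraction powerstatus")
print(stdout.read().decode())       # e.g. "Server power status: ON"
client.close()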


iKVM: Remote console access

Apart from that, one can also connect a keyboard, mouse and monitor directly to the server: on a rack or tower server one would either connect the I/O devices when needed or have all the servers connected to a
KVM switch A KVM switch (with KVM being an abbreviation for "keyboard, video, and mouse") is a hardware device that allows a user to control multiple computers from one or more sets of keyboards, video monitors, and mice. Name Switches to connect mu ...
. The same is possible with servers in a blade-enclosure: via the optional iKVM module in an enclosure one can access each of one's 16 blades directly. It is possible to include the iKVM switch in an existing network of digital or analog KVM switches. The iKVM switch in the Dell enclosure is an
Avocent Avocent, a business of Vertiv, is an information-technology products manufacturer headquartered in Huntsville, Alabama. Avocent formed in 2000 from the merger of the world's two largest manufacturers of KVM (keyboard, video and mouse) equipment ...
switch, and one can connect (tier) the iKVM module to other digital KVM switches such as the Dell 2161 and 4161 or Avocent DSR digital switches. Tiering the iKVM to analog KVM switches such as the Dell 2160AS or 180AS or other Avocent (compatible) KVM switches is also possible. Unlike the CMC, the iKVM switch is not redundant, but since one can always (also) reach a server via its iDRAC, an outage of the KVM switch does not prevent access to the server console.


Flex addresses

The M1000e enclosure offers the option of flex-addresses. This feature allows the system administrators to use dedicated or fixed
MAC address A media access control address (MAC address) is a unique identifier assigned to a network interface controller (NIC) for use as a network address in communications within a network segment. This use is common in most IEEE 802 networking techno ...
es and
World Wide Name A World Wide Name (WWN) or World Wide Identifier (WWID) is a unique identifier used in storage technologies including Fibre Channel, Parallel ATA, Serial ATA, SCSI and Serial Attached SCSI (SAS). A WWN may be employed in a variety of roles, such ...
s (WWN) that are linked to the chassis, the position of the blade and the location of the I/O interface. It allows administrators to physically replace a server blade and/or a mezzanine card while the system continues to use the same MAC addresses and/or WWNs for that blade, without the need to manually change any MAC or WWN addresses and without the risk of introducing duplicate addresses: with flex-addresses the system assigns a globally unique MAC/WWN based on the location of that interface in the chassis. The flex-addresses are stored on an SD card inserted in the CMC module of the chassis and, when used, they override the addresses burned into the interfaces of the blades in the system.
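The sketch below is a purely conceptual illustration of that idea, not Dell's actual implementation: the chassis owns a pool of addresses keyed by blade slot and fabric port, so a replacement blade in the same slot inherits the addresses of its predecessor.

# Conceptual illustration only - not Dell's real flex-address mechanism.
# The chassis, not the blade, owns the MAC addresses; they are keyed by
# (blade slot, fabric port), so swapping the blade hardware changes nothing.
CHASSIS_MAC_POOL = {
    (1, "A1"): "00:1e:c9:aa:00:01",   # example addresses, not from a real pool
    (1, "A2"): "00:1e:c9:aa:00:02",
    (2, "A1"): "00:1e:c9:aa:00:03",
    (2, "A2"): "00:1e:c9:aa:00:04",
}

def flex_address(slot: int, fabric_port: str) -> str:
    """Return the chassis-assigned MAC for a slot/port, overriding the
    address burned into the blade's own interface."""
    return CHASSIS_MAC_POOL[(slot, fabric_port)]

# A new blade inserted in slot 1 keeps the MAC its predecessor used on A1:
print(flex_address(1, "A1"))          # -> 00:1e:c9:aa:00:01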


Power and cooling

The M1000e enclosure is, like most blade systems, designed for IT infrastructures demanding high availability. (Nearly) everything in the enclosure supports redundant operation: each of the 3 I/O fabrics (A, B and C) supports two switches or pass-through cards, and the chassis supports two CMC controllers, even though it can run with only one CMC. Power and cooling are also redundant: the chassis supports up to six power supplies and nine fan units, all inserted from the back and all hot-swappable. The power supplies are located at the bottom of the enclosure while the fan units sit next to and in between the switch or I/O modules. Each power supply is a 2700-watt unit and uses 208–240 V AC as input voltage. A chassis can run with as few as two power supplies (a 2+0 non-redundant configuration). Depending on the required redundancy one can use a 2+2 or 3+3 setup (input redundancy, where each group of supplies is connected to a different power source) or a 3+1, 4+2 or 5+1 setup, which protects against the failure of a single power-supply unit but not against losing an entire AC power group.
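The arithmetic behind these redundancy modes is straightforward. The sketch below only multiplies the 2700-watt rating mentioned above by the number of active supplies; it deliberately ignores derating, PSU efficiency and the CMC's real power-budgeting logic.

# Back-of-the-envelope sketch of the power-redundancy layouts described above.
# Redundant supplies only cover failures, so usable capacity comes from the
# active supplies alone. Real chassis power budgeting is more involved.
PSU_WATTS = 2700

def usable_watts(active: int, redundant: int) -> int:
    """Capacity available in an 'active + redundant' power-supply layout."""
    return active * PSU_WATTS

print(usable_watts(2, 0))   # 2+0 non-redundant          ->  5400 W
print(usable_watts(3, 3))   # 3+3 input (grid) redundancy ->  8100 W
print(usable_watts(5, 1))   # 5+1 PSU redundancy         -> 13500 W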


